Non-Cooperative and Deceptive Virtual Agents

Author

  • David Traum
Abstract

Virtual agents that engage in dialogue with people can be used for a variety of purposes, including as service and information providers, tutors, confederates in psychology experiments, and role players in social training exercises. It seems reasonable that agents acting as service and information providers, and arguably as tutors, would be truthful and cooperative. For other applications, however, such as role-playing opponents, competitors, or more neutral characters in a training exercise, total honesty and cooperativeness would defeat the purpose of the exercise and fail to train people in coping with deception. The Institute for Creative Technologies at the University of Southern California has created several role-playing characters, using different models of dialogue and of uncooperative and deceptive behavior. This article briefly describes these models, as used in two different genres of dialogue agent: interviewing and negotiation. The models are presented in order from least to most sophisticated reasoning about deception.

Most accounts of pragmatic reasoning in dialogue use versions of Grice's cooperative principles and maxims [1] to derive utterance meanings (which might be indirect in their expression). However, these maxims, such as "be truthful," don't cover situations in which conversationalists are deceptive or otherwise uncooperative, even though much human dialogue contains aspects of uncooperative behavior. Gricean accounts alone don't adequately cover cases in which conversational participants aren't cooperative—for example, why do they ever answer at all? The notion of discourse obligations [2] differentiates the obligation to respond from the mechanism of response generation, which could be cooperative, neutral, or deceptive.
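The separation described above can be illustrated with a minimal sketch (all class and policy names here are hypothetical, not from the article): a question imposes a discourse obligation to respond, while a separate policy decides whether the response content is cooperative, neutral, or deceptive.

```python
# Hypothetical sketch: discourse obligations separate the duty to respond
# from the policy used to generate the response's content.
from dataclasses import dataclass, field

@dataclass
class DialogueAgent:
    facts: dict                      # the agent's private knowledge
    policy: str = "cooperative"      # "cooperative", "neutral", or "deceptive"
    obligations: list = field(default_factory=list)

    def hear(self, question: str) -> None:
        # A question imposes a discourse obligation to address it,
        # regardless of how (or how truthfully) the agent will answer.
        self.obligations.append(question)

    def respond(self) -> str:
        if not self.obligations:
            return ""
        question = self.obligations.pop(0)
        truth = self.facts.get(question, "unknown")
        if self.policy == "cooperative":
            return truth                    # truthful, relevant answer
        if self.policy == "neutral":
            return "I can't discuss that."  # discharges the obligation without informing
        # Deceptive: answer, but with false content.
        return "somewhere else" if truth != "unknown" else truth

agent = DialogueAgent(facts={"where is the suspect?": "in the market"},
                      policy="deceptive")
agent.hear("where is the suspect?")
print(agent.respond())  # → somewhere else
```

Note that all three policies discharge the obligation by answering; only the content-generation step differs, which is the distinction the discourse-obligations account draws.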


Similar articles

Strategically Misleading the User: Building a Deceptive Virtual Suspect

Humans lie every day, from the least harmful lies to the most impactful ones. Therefore, in an attempt to design virtual agents endowed with advanced decision-making abilities, researchers have focused their efforts not only on designing cooperative and truthful agents but also on deceptive and lying ones. In this paper we propose a model capable of engaging an agent in an uncooperative, misleading dialo...


Lie to Me: Virtual Agents that Lie (Extended

In order to deceive, agents need Theory of Mind (ToM) capabilities, that is, the capability to model others and to reason about the consequences of their actions and their implications for them. In this paper we provide a model for deceptive agents that use a theory of mind with N levels. We then present a case study that was used to compare deceptive agents with one level and with two levels ...
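The idea of an N-level theory of mind can be sketched as nested agent models, each one level shallower than the agent that holds it. The following is a minimal illustration under assumed naming (the `Agent` class and its belief-update rule are hypothetical, not the cited papers' model):

```python
# Hypothetical sketch of N-level theory of mind: each agent holds a model
# of its interlocutor one ToM level down; level 0 holds no model at all.
class Agent:
    def __init__(self, name, tom_level, belief):
        self.name = name
        self.tom_level = tom_level
        self.belief = belief                 # what this agent believes is true
        # Nested model of the other agent, one level shallower.
        self.other = (Agent(f"model@{tom_level - 1}", tom_level - 1, belief)
                      if tom_level > 0 else None)

    def update(self, statement):
        # The modelled listener naively adopts what it is told.
        self.belief = statement
        return self.belief

    def choose_statement(self, truth, lie):
        # Level 0: no model of the listener, so just state the truth.
        if self.tom_level == 0:
            return truth
        # Level >= 1: predict the belief the statement would install in the
        # listener, and pick the lie if it would successfully mislead.
        predicted = self.other.update(lie)
        return lie if predicted == lie else truth

speaker = Agent("suspect", tom_level=1, belief="gun in drawer")
print(speaker.choose_statement(truth="gun in drawer", lie="no gun here"))
# → no gun here
```

A richer model would let the level-0 listener resist or double-check statements, which is what makes a two-level ToM (modelling the listener's model of the speaker) potentially more effective than a single level.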


Deceptive Agents and Language (Extended

The use of virtual agents in training requires them to have several human-like characteristics; one of these is the ability to appear deceptive. We take work from the psychology literature on cues to deception, with a focus on language-related cues, and examine whether it is possible to use resources from the field of Language Technology to construct scenarios with agents showing cues to decepti...


The Great Deceivers: Virtual Agents and Believable Lies

This paper proposes a model giving Theory of Mind (ToM) capabilities to artificial agents to allow them to carry out deceptive behaviours. It describes a model supporting an N-level Theory of Mind and reports a study to assess whether equipping agents with a two-level ToM results in them being perceived as more socially intelligent than agents with a single-level ToM. A deception game being deve...


The Role of Trust and Deception in Virtual Societies

In hybrid situations where artificial agents and human agents interact, the artificial agents must be able to reason about the trustworthiness and deceptive actions of their human counterpart. Thus a theory of trust and deception is needed that will support interactions between agents in virtual societies. There are several theories on trust (fewer on deception!), but none that deals specifical...



Publication date: 2012